High performance Linux clusters - with OSCAR, Rocks, openMosix, and MPI
Abstract
Computing speed isn't just a convenience. Faster computers allow us to solve larger problems and to find solutions more quickly, with greater accuracy, and at lower cost. All of this adds up to a competitive advantage. In the sciences, it may mean the difference between being the first to publish and not publishing at all. In industry, it may determine who's first to the patent office.

Traditional high-performance clusters have proved their worth in a variety of uses, from predicting the weather to industrial design, from molecular dynamics to astronomical modeling. High-performance computing (HPC) has created a new approach to science: modeling is now a viable and respected alternative to the more traditional experimental and theoretical approaches. Clusters are also playing a greater role in business. High performance is a key issue in data mining and in image rendering. Advances in clustering technology have led to high-availability and load-balancing clusters, and clustering is now used for mission-critical applications such as web and FTP servers. For example, Google uses an ever-growing cluster composed of tens of thousands of computers.

Because of the expanding role that clusters play in distributed computing, it is worth considering the terminology briefly. There is a great deal of ambiguity, and the terms used to describe clusters and distributed computing are often used inconsistently. This chapter doesn't provide a detailed taxonomy; it doesn't include a discussion of Flynn's taxonomy or of cluster topologies. That has been done quite well a number of times, and too much of it would be irrelevant to the purpose of this book. However, this chapter does try to explain the language used. If you need more general information, see Appendix A for other sources.
When computing, there are three basic approaches to improving performance—use a better algorithm, use a faster computer, or divide the calculation among multiple computers. A very common analogy is that of a horse-drawn cart. You can lighten the load, you can get a bigger horse, or you can get a team of horses. (We'll ignore the option of going into therapy and learning to live with what you have.) Let's look briefly at each of these approaches.
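The "team of horses" option, dividing a calculation among multiple workers, can be sketched in a few lines. The example below is a minimal, single-machine illustration using Python's multiprocessing module rather than MPI across a cluster; the function names and the chosen problem (a sum of squares) are invented for this illustration, but the pattern of splitting a range into chunks and combining partial results is the same one a cluster program would use.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum of squares over the half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    """Split [0, n) into one chunk per worker, then combine partial results."""
    step = n // workers
    chunks = [(w * step, (w + 1) * step if w < workers - 1 else n)
              for w in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Gives the same answer as the serial sum, computed in parallel chunks.
    print(parallel_sum_of_squares(1000))
```

The speedup comes from the chunks being independent; problems whose pieces must communicate frequently are harder to divide, which is exactly where message-passing libraries such as MPI come in.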
Similar resources
Benchmarking of PVM and LAM/MPI Using OSCAR, Rocks and Knoppix Clustering Tools
Parallel and distributed computing plays an important role by dividing a big process into many small processes that run in parallel with the help of a number of processors. Communication among distributed and parallel processors takes place through different APIs. In this paper, the performance of two APIs, PVM and LAM/MPI, and three clustering tools, OSCAR, Rocks, and Knoppix, is analyzed...
Parallel computing using MPI and OpenMP on self-configured platform, UMZHPC.
Parallel computing is a topic of interest for a broad scientific community, since it facilitates many time-consuming algorithms in different application domains. In this paper, we introduce a novel platform for parallel computing, using MPI and OpenMP, based on a set of networked PCs. UMZHPC is a free Linux-based parallel computing infrastructure that has been developed to cr...
MPI-IO: A Standard, Portable API for High-Performance Parallel I/O
MPI-IO, the I/O part of the MPI-2 standard, is a portable API for high-performance parallel I/O. It is specifically designed to overcome the performance and portability limitations of the Unix-like APIs currently supported by most parallel file systems. We discuss the main features of MPI-IO and describe our MPI-IO implementation, ROMIO, which runs on most machines and file systems, including Linux ...
Network Performance in High Performance Linux Clusters
Linux-based clusters have become more prevalent as a foundation for High Performance Computing (HPC) systems. With a better understanding of network performance in these environments, we can optimize configurations and develop better management and administration policies to improve operations. To assist in this process, we developed a network measurement tool to measure UDP, TCP and MPI commun...
The Integration of Scalable Systems Software with the OSCAR Clustering Toolkit
The Scientific Discovery Through Advanced Computing (SciDAC) project has implemented a Scalable Systems Software (SSS) center. The SSS center is charged with building a scalable and robust operating environment for high end computing systems. Part of the approach is building a general solution for multiple platforms, to include both clusters and individual machines. This starts a trend toward s...
Publication date: 2005